instant selection - definition. What is instant selection
PROCEDURE IN MACHINE LEARNING AND STATISTICS
Input selection; Feature selection problem; Variable selection; Feature subset selection
  • Embedded method for feature selection
  • Wrapper method for feature selection
  • Filter method for feature selection
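A filter method, for instance, scores each feature independently of any learning model. A minimal sketch under that assumption, ranking features by absolute Pearson correlation with the target (the data and the top-k cutoff are illustrative, not from the source):

```python
import numpy as np

def filter_select(X, y, k):
    """Filter method: rank features by |Pearson correlation| with y
    and return the indices of the top-k. Model-agnostic by design."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

# Toy data: feature 0 tracks the target, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
X = np.column_stack([y + 0.1 * rng.normal(size=100),
                     rng.normal(size=100)])
print(filter_select(X, y, k=1))  # feature 0 should rank first
```

Wrapper methods instead evaluate candidate subsets by training the model itself, and embedded methods perform the selection as part of model fitting (e.g. via a sparsity penalty); the filter above is only the simplest of the three families.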

Feature selection
In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.
Facebook Instant Articles         
FEATURE OF THE SOCIAL NETWORKING WEBSITE FACEBOOK
Instant Articles; Facebook Instant
Facebook Instant Articles is a feature from the social networking company Facebook for collaborating news and content publishers, which a publisher can choose to use for articles it selects. When a publisher selects an article for Instant Articles, people browsing Facebook in its mobile app see the entire article within Facebook's app, with formatting very similar to that on the publisher's website.
Material selection         
  [Figure 3. Ashby chart with performance indices plotted for maximum result]
SELECTION PROCESS OF A MATERIAL FOR A PARTICULAR APPLICATION
Materials selection
Material selection is a step in the process of designing any physical object. In the context of product design, the main goal of material selection is to minimize cost while meeting product performance goals.
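In its simplest form this reduces to a constrained minimum-cost choice: among the materials that meet the performance requirement, pick the cheapest. A toy sketch of that idea (all material names, prices, and strengths below are invented for illustration, not real engineering data):

```python
# Hypothetical material table: cost and yield strength per candidate.
materials = {
    "steel":    {"cost_per_kg": 0.8,  "yield_strength_mpa": 250},
    "aluminum": {"cost_per_kg": 2.0,  "yield_strength_mpa": 95},
    "titanium": {"cost_per_kg": 35.0, "yield_strength_mpa": 880},
}

def select_material(required_strength_mpa):
    """Return the cheapest material meeting the strength requirement,
    or None if no candidate qualifies."""
    candidates = {name: props for name, props in materials.items()
                  if props["yield_strength_mpa"] >= required_strength_mpa}
    if not candidates:
        return None
    return min(candidates, key=lambda n: candidates[n]["cost_per_kg"])

print(select_material(200))  # steel: cheapest of the qualifying materials
```

Real material selection trades off many more properties at once (density, stiffness, corrosion resistance), which is what Ashby charts with performance indices are used for; this sketch shows only the single-constraint case.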

Wikipedia

Feature selection

In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Feature selection techniques are used for several reasons:

  • simplification of models to make them easier to interpret by researchers/users,
  • shorter training times,
  • avoidance of the curse of dimensionality,
  • improved compatibility of the data with a learning model class,
  • encoding of inherent symmetries present in the input space.

The central premise when using a feature selection technique is that the data contains some features that are either redundant or irrelevant, and can thus be removed without incurring much loss of information. Redundant and irrelevant are two distinct notions, since one relevant feature may be redundant in the presence of another relevant feature with which it is strongly correlated.
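The distinction can be seen numerically. In the hypothetical setup below, a second feature is individually relevant (strongly correlated with the target) yet redundant given the first, while a third is simply irrelevant (the data is synthetic, chosen only to make the contrast visible):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=500)
f1 = y + 0.1 * rng.normal(size=500)    # relevant feature
f2 = f1 + 0.01 * rng.normal(size=500)  # relevant, but redundant given f1
noise = rng.normal(size=500)           # irrelevant feature

corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
print(round(corr(f2, y), 3))     # high: f2 is relevant on its own
print(round(corr(f2, f1), 3))    # near 1: f2 adds little beyond f1
print(round(corr(noise, y), 3))  # near 0: irrelevant
```

A univariate filter would happily keep both f1 and f2; detecting the redundancy between them requires looking at features jointly, which is one motivation for wrapper and embedded methods.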

Feature selection techniques should be distinguished from feature extraction. Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features. Feature selection techniques are often used in domains where there are many features and comparatively few samples (or data points). Archetypal cases for the application of feature selection include the analysis of written texts and DNA microarray data, where there are many thousands of features, and a few tens to hundreds of samples.
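The contrast between the two operations can be sketched in a few lines (the column indices and derived features are arbitrary; the products and means stand in for extraction methods such as PCA):

```python
import numpy as np

X = np.arange(12.0).reshape(4, 3)  # 4 samples, 3 original features

# Feature selection: keep a subset of the original columns, unchanged.
selected = X[:, [0, 2]]

# Feature extraction: build new features as functions of the originals.
extracted = np.column_stack([X.mean(axis=1), X[:, 0] * X[:, 1]])

print(selected.shape)   # (4, 2) - columns of X itself
print(extracted.shape)  # (4, 2) - derived quantities, not original columns
```

The selected matrix contains values that appear verbatim in X, so the surviving features keep their original meaning and units; the extracted matrix does not, which is why selection is often preferred when interpretability matters.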